Architecture Based on Concurrent Engineering Approach for Allocation of Tolerances in Part and Assembly Design of Flexible Components Part
Authors

Abstract
Referring to the design and implementation of large service-oriented systems, two different approaches, choreography and orchestration, need to be considered and studied. Choreography is a specification protocol defining a global picture of the way services interact with each other, whereas orchestration is a local view focusing on the behavior of a single service. A critical issue, the so-called conformance problem, is to validate whether a specific orchestration can play as a participant whose observable behavior is required by a given choreography. In this paper, we introduce two languages for describing choreography and orchestration respectively. Based on the two languages, we give a definition of endpoint projection, which is used for the automatic generation of orchestrations; conformance validation is thereby reduced to verifying process refinement between two orchestrations. Further, we note that not all choreography models are locally implementable: some global models cannot be translated into sets of orchestrations satisfying the global behavioral rules. To ensure that a choreography model is locally implementable, certain conditions must be satisfied. As a consequence of our work, skeleton code for service implementations can be generated automatically, and the interoperability between collaborating services is guaranteed.
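Endpoint projection lends itself to a small illustration. In the sketch below, a choreography is reduced to an ordered list of (sender, receiver, message) interactions; the tuple encoding and all names are invented for illustration and are not the paper's two languages.

```python
# Illustrative sketch of endpoint projection: project a global
# choreography onto one participant's local send/receive behavior.
# The tuple encoding and names are invented; the paper defines its own
# choreography and orchestration languages.

def endpoint_projection(choreography, participant):
    """Extract the local behavior of one participant."""
    local = []
    for sender, receiver, message in choreography:
        if sender == participant:
            local.append(("send", receiver, message))
        elif receiver == participant:
            local.append(("recv", sender, message))
        # Interactions not involving the participant are invisible locally.
    return local

# A toy booking choreography and the orchestration skeletons it induces.
choreo = [
    ("Client", "Broker", "request"),
    ("Broker", "Airline", "query"),
    ("Airline", "Broker", "offer"),
    ("Broker", "Client", "quote"),
]
for p in ("Client", "Broker", "Airline"):
    print(p, endpoint_projection(choreo, p))
```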
4: Jun Li, Haibing Guan, Zengxiang Li and Alei Liang. A Study on Register Mapping Optimization in Dynamic Binary Translation. Abstract: Dynamic binary translation technology usually chooses basic blocks as the unit of translation and execution, so when context switches occur frequently between translation and execution, the current state of the source machine (such as temporary variables) must be saved somewhere for later reuse. In traditional dynamic binary translation, the source machine's state is mapped onto the target machine using the target's available resources. Mapping source registers onto target memory is such a convenient and easy method that almost all binary translators adopt it, without taking into account that the target may have fewer registers than the source; this leads to large numbers of needless load/store operations whenever a context switch occurs. While mapping source registers directly onto target registers is an intuitively good optimization, this approach hardly yields any performance improvement in some translators such as QEMU. This paper studies QEMU's register mapping mechanism and proposes an improved translation mechanism; a new direction for register optimization is put forward as well.
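A minimal sketch of the underlying trade-off, assuming invented register counts and a hotness heuristic (this is not QEMU's actual mechanism): when the target has fewer registers than the source, only some source registers can stay register-resident, and the rest spill to a memory-resident CPU state block, paying a load/store on every access.

```python
# Minimal sketch of source-to-target register mapping under register
# pressure. Register counts, names and the hotness heuristic are
# invented for illustration; this is not QEMU's actual mechanism.

SOURCE_REGS = [f"r{i}" for i in range(16)]   # source ISA: 16 registers
TARGET_REGS = [f"t{i}" for i in range(8)]    # target ISA: only 8 usable

def build_mapping(usage_counts):
    """Keep the hottest source registers in target registers; spill the rest.

    usage_counts maps a source register to its access count in the block.
    Returns {source_reg: target_reg} or {source_reg: ("mem", offset)}.
    """
    by_heat = sorted(SOURCE_REGS, key=lambda r: -usage_counts.get(r, 0))
    mapping = {}
    for i, src in enumerate(by_heat):
        if i < len(TARGET_REGS):
            mapping[src] = TARGET_REGS[i]                 # register-resident
        else:
            mapping[src] = ("mem", 4 * SOURCE_REGS.index(src))  # spilled
    return mapping

print(build_mapping({"r0": 12, "r1": 9, "r5": 7, "r13": 1}))
```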
7: Cagatay Catal and Banu Diri. Object-Oriented Metrics Based Artificial Immune Systems for Software Fault Prediction. Abstract: The features of real-time dependable systems are availability, reliability, safety and security. In the near future, real-time systems will be able to adapt themselves to specific requirements, and real-time dependability assessment techniques will be able to classify modules as faulty or fault-free. Software fault prediction models help us develop dependable software, and they are commonly applied prior to system testing. In this study, we examine Chidamber-Kemerer (CK) metrics and some method-level metrics as the independent variables for our model, which is based on the Artificial Immune Recognition System (AIRS) algorithm. The dataset is part of the NASA Metrics Data Program, and the class-level metrics are taken from the PROMISE repository. Our focus is not on validating individual metrics but on applying AIRS with appropriate metrics to enhance prediction performance. Results indicate that the combination of CK metrics with the lines-of-code (LOC) metric provides the best prediction results for our AIRS-based fault prediction model. The results suggest that class-level data, rather than traditional method-level data, should be used to construct fault prediction models. Furthermore, this technique can form part of a dependability assessment technique in the future.
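AIRS evolves a pool of memory cells during training and then classifies new samples by a k-nearest-neighbour vote over that pool. The sketch below shows only this final classification stage; the memory cells and the CK+LOC feature vectors are invented for illustration, and the training stage (cloning, mutation, resource competition) is omitted.

```python
# Sketch of the classification stage of an AIRS-style fault predictor:
# a k-nearest-neighbour majority vote over the evolved memory-cell pool.
# Feature vectors (CK metrics + LOC) and labels are invented.
import math
from collections import Counter

# (WMC, DIT, NOC, CBO, RFC, LCOM, LOC) -> module known to be faulty?
memory_cells = [
    ((25, 3, 0, 14, 60, 0.8, 900), True),
    ((4, 1, 2, 3, 10, 0.1, 120), False),
    ((18, 2, 1, 9, 35, 0.5, 500), True),
    ((6, 1, 0, 4, 12, 0.2, 150), False),
]

def predict_faulty(module, k=3):
    nearest = sorted(memory_cells, key=lambda c: math.dist(c[0], module))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(predict_faulty((20, 2, 0, 11, 40, 0.6, 700)))  # predicted faulty: True
```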
8: Keqiao Yang, Songyang Ding, Rongcai Zhao and Jianmin Pang. The Recovery of Indirect Procedure Calls for Static Binary Translation. Abstract: One of the fundamental problems in the analysis of binary executable code is recognizing the target addresses of indirect procedure call instructions. Without these addresses, the decoding of the machine instructions of a given program is incomplete, as is any analysis of that program. In this paper we present a technique for recovering indirect procedure calls for static binary translation. The technique is based on constructing a mapping table whose two items are a function's address and its name, and on generating a search function that, at the position of an indirect call instruction in a procedure, looks up the function name in the table by its address. The presented technique has been tested on IA-64 code generated by C compilers, and our tests show that it is effective. The technique was adopted in our static binary translator ITA, which automatically translates IA-64 binary code first into a C program and then into Alpha binary code; the translated code runs faster on the target machine than it would under an interpreter.
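The table-plus-dispatch idea can be sketched compactly. The addresses, names and dispatch function below are invented; the paper emits the corresponding structures in the generated C program.

```python
# Sketch of indirect-call recovery: the translator statically builds an
# address-to-name mapping table, and every indirect call site dispatches
# through a generated search function. All values here are invented.

FUNC_TABLE = {                 # (function address, function name)
    0x400010: "init",
    0x4000A0: "compute",
    0x400130: "finish",
}

TRANSLATED = {                 # translated function bodies, keyed by name
    "init":    lambda: print("init"),
    "compute": lambda: print("compute"),
    "finish":  lambda: print("finish"),
}

def indirect_call(target_address):
    """Emitted in place of each indirect call: resolve the name at run time."""
    name = FUNC_TABLE.get(target_address)
    if name is None:
        raise RuntimeError(f"unrecovered indirect target {target_address:#x}")
    return TRANSLATED[name]()

indirect_call(0x4000A0)        # dispatches to the translated 'compute'
```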
9: Irfan Anjum Manarvi and Neal Peter Juster. A Software Architecture Based on Concurrent Engineering Approach for Allocation of Tolerances in Part and Assembly Design of Flexible Components, Part I. Abstract: A concurrent engineering approach to tolerance allocation is investigated in this paper as a primary requirement for designers at an early design stage. An integrated tolerance synthesis model has been developed which provides a strategy for viewing tolerance synthesis in a global perspective. A step-by-step tolerance allocation process forms an integral part of this model and has been applied to a simple rectangular part. The case study provides a strategy for using established theories, methods, techniques, models and algorithms, together with parameters such as customer requirements, design, performance, materials, manufacturing, assembly, quality, cost, experience, representation and aesthetics, for tolerance synthesis in the part and assembly design of flexible parts in a concurrent engineering environment. The paper consists of two parts. Part I introduces the objectives of the current research and builds up the tolerance synthesis model; the tolerance allocation process is applied to a typical case study up to the manufacturing stage, and subsequent stages are covered in Part II. The architecture developed can be used as a basis for developing customized software for tolerance allocation, or for developing an integrated application with other design/analysis software such as I-DEAS, ABAQUS or Pro/ENGINEER.

10: Irfan Anjum Manarvi and Neal Peter Juster. A Software Architecture Based on Concurrent Engineering Approach for Allocation of Tolerances in Part and Assembly Design of Flexible Components, Part II. Abstract: A concurrent engineering approach to tolerance allocation is investigated in this paper as a primary requirement for designers at an early design stage. An integrated tolerance synthesis model has been developed which provides a strategy for viewing tolerance synthesis in a global perspective. A step-by-step tolerance allocation process forms an integral part of this model and has been applied to a simple rectangular part. The case study provides a strategy for using established theories, methods, techniques, models and algorithms, together with parameters such as customer requirements, design, performance, materials, manufacturing, assembly, quality, cost, experience, representation and aesthetics, for tolerance synthesis in the part and assembly design of flexible parts in a concurrent engineering environment. This is Part II of the paper and continues the application of the tolerance allocation process from the assembly stage. The architecture developed can be used as a basis for developing customized software for tolerance allocation, or for developing an integrated application with other design/analysis software such as I-DEAS, ABAQUS or Pro/ENGINEER.

11: John Dunn, Sean McCorkle, Wenyi Bi, Reginald Garbriel and Walter Lewis. Integrated Information about DNA Methylation in Cancers. Abstract: Methylation of DNA at the 5 position of cytosine (5mC) is a covalent epigenetic modification that plays an important role in the control of gene expression and chromosome structure in mammalian cells. Determining the global pattern of DNA methylation, or the methylome, and its variation among cells has become an area of considerable interest, primarily because of its potential use as an early diagnostic biomarker for cancer. Most tumor cells exhibit hypomethylation of their genomes, but the promoters of certain tumor suppressor genes are frequently silenced in tumor cells through hypermethylation at cytosine-phosphate-guanosine dinucleotides, so-called CpG sequences, in CpG islands that occur at or near promoters. The research papers and literature on this subject have grown rapidly, however, and are scattered across different systems. The lack of a comprehensive information system gathering all correlated information into one picture forces biologists and biomedical scientists to search manually through scores of biomedical journals and related websites. To eliminate this manual work, our project developed an information system of CpG-methylated DNA data in cancers for the Biology Department at Brookhaven National Laboratory (BNL), U.S. DOE. The information system consists of a database system (back end) and a web application (front end). The system makes methylation- and cancer-related information available through web browsers, queried at the gene level. It yielded important insights into CpG methylation, and it disseminates to the research community comprehensive CpG methylation data for many human cancers, both data already described in the literature and data we are finding with our PE-SACO procedure.

12: Suying Yang, Hongyan Piao, Li Zhang and Xiaobing Zheng. Research and Application of the IDEA Algorithm in a USB Security Key. Abstract: Since the IDEA algorithm has the advantage of resisting differential analysis and correlation analysis, its random-feedback work pattern is applied to a USB security key to improve system security. By distributing the key based on grouping with a checksum, and by using a low/high-bit method to simplify the modular multiplication, the operating efficiency of the IDEA algorithm is improved. Experiments show that the algorithm achieves better diffusion.
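IDEA's characteristic operation is multiplication modulo 2^16 + 1 (with 0 encoding 2^16), and the low/high-bit simplification exploits the fact that 2^16 is congruent to -1 modulo 2^16 + 1: the 32-bit product p = hi*2^16 + lo reduces to lo - hi with no division. The sketch below shows this well-known trick only; the paper's key-distribution scheme is not reproduced.

```python
# IDEA-style multiplication modulo 2**16 + 1 using the low/high trick.
# Since 2**16 = -1 (mod 2**16 + 1), p = hi*2**16 + lo = lo - hi, so the
# reduction needs no division. Operand/result 0 encodes 2**16.

MOD = 0x10001  # 2**16 + 1 (a prime)

def idea_mul(a, b):
    if a == 0:
        return (MOD - b) & 0xFFFF      # 2**16 * b = -b (mod 2**16 + 1)
    if b == 0:
        return (MOD - a) & 0xFFFF
    p = a * b
    lo, hi = p & 0xFFFF, p >> 16
    return (lo - hi + (1 if lo < hi else 0)) & 0xFFFF

# Check the trick against direct modular reduction.
for a, b in [(3, 7), (40000, 50000), (65535, 65535), (0, 123)]:
    aa, bb = a or 0x10000, b or 0x10000
    assert (idea_mul(a, b) or 0x10000) == (aa * bb) % MOD
print("low/high multiplication agrees with direct reduction")
```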
13: Yangming Ma. An Intelligent Agent-Oriented System for Integrating Network Security Devices and Handling Large Amounts of Security Events. Abstract: To integrate network security devices so that they act as a battle team, and to efficiently handle the large number of security events produced by various network applications, Network Security Intelligent Centralized Management is a basic solution. In this paper, we introduce an intelligent agent-oriented Network Security Intelligent Centralized Management System and describe the system model, its mechanism, the hierarchy of security events, the data flow diagram, the filtering, transaction and normalization of security events, the clustering and merging algorithm, and the correlation algorithm. Experiments show that the system can significantly reduce false positives and improve the quality of security events, making it convenient for security administrators to integrate security devices and deal with large volumes of security events.

15: Yujun Zheng and Jinyun Xue. Design and Implementation of Domain-Specific Language via Category Theoretic Computations. Abstract: Domain-specific languages (DSLs) provide appropriate built-in abstractions and notations in a particular problem domain, and have been suggested as a means for developing highly adaptable and reliable software systems. The paper presents a theory-based framework to support domain-specific design and implementation. Focusing on reasoning about the interactive relationships among software models and objects at different levels of abstraction and granularity, our framework provides a unified, categorial environment for intra-model composition and inter-model refinement of DSL specifications via category-theoretic computations, and therefore enables a high level of reusability and dynamic adaptability.
16: Markus Pizka and Elmar Juergens. Tool-Supported Multi-Level Language Evolution. Abstract: Through their high degree of specialization, domain-specific languages (DSLs) promise higher productivity, and thus shorter development time and lower costs, than general-purpose programming languages. Since many domains are subject to continuous evolution, the associated DSLs inevitably have to evolve too in order to retain their value. However, the continuous evolution of a DSL itself can be very expensive, since its compiler as well as existing words (i.e. programs) have to be adapted according to changes to the DSL specification. These maintenance costs compromise the expected reduction of development costs and thus limit the success of domain-specific languages in practice. This paper proposes a concept and a tool for the evolutionary development of domain-specific languages. It provides language evolution operations that automate the adaptation of the compiler and of existing DSL programs according to changes to the DSL specification. This significantly reduces the cost of DSL maintenance and paves the way for bottom-up development of domain-specific languages. The example of the evolutionary development of a description language for product catalog systems demonstrates the feasibility of the proposed DSL development approach, its benefits, and the capabilities and limitations of the current implementation of the language evolver "Lever".

17: QIAN Zhong-sheng, MIAO Huai-kou and HE Tao. Modeling Web Navigations Using UML and Z. Abstract: Web applications are being employed to support a wide range of important activities. As Web applications become more and more complex, there is increasing concern about how to retrieve and navigate effectively to new information and new Web applications. This has driven many researchers to investigate both how people navigate within Web applications and how Web applications should be modeled and designed. UML is widely used in industrial systems development, and Z has proved to be an appropriate language for representing abstract UML models. In this work, we present a modeling approach to Web navigation that combines UML diagrams with Z notation for formal representation. We model web pages, hyperlinks, page-scoped variables and the frames contained in pages using Z. For the behavioral representation of a user in page navigation, we employ UML state diagrams. A page secure level psl for any page and a user secure level usl for any user of the Web application are defined, and a Page Navigation Diagram for the Web application is given. To support the testing and verification of the proposed model, a series of properties the model should satisfy is also presented. The proposed approach affords an underlying guideline for modeling Web navigation by combining UML with Z.
18: Xiong Xie and Weishi Zhang. The Semantic Specification of Component Adaptation and Composition. Abstract: Software component adaptation is a crucial problem in component-based software engineering. In this paper, a component model is first described as a mathematical specification. Three composition architectures are described with formal semantics: sequential, alternative and parallel. The application conditions of the component adaptation architectures are analyzed, and the system automatically selects a proper architecture for adapting the components according to those conditions. The specification of the complex component can then be obtained automatically from the specifications of the adapted components. Because a component is composed on the basis of its semantic specification, the proposed architecture supports semantic representation of components and does not depend on the computing environment. The proposed approach offers a guarantee for the formal analysis of component composition and for the validation of proper component composition.
22: Guohua Cui, Shejie Lu and Zhiyuan Liu. A Secure Data Transmission Protocol for Mobile Ad Hoc Networks. Abstract: Secure routing and data transmission have stimulated wide interest in ad hoc network research, since such networks are more vulnerable to attacks due to their structural characteristics. Several efficient and secure schemes have been proposed to protect the network from external attacks, but efficient ways to detect and resist internal attacks are still lacking. Here we propose a secure data transmission protocol (SDTP) based on Reed-Solomon error-correcting codes to achieve secure data transmission in an environment subject to Byzantine attacks. The protocol can distinguish malicious behavior from transmission errors and locate the malicious node accurately. The algorithms used in this protocol also apply to secure routing protocols.
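The core idea, sending redundant shares over multiple paths so that decoding both recovers the data and locates the misbehaving path, can be shown with a toy stand-in. Below, 3-path repetition with majority voting replaces Reed-Solomon coding (which achieves the same localization with far less redundancy); paths and payloads are invented, and the paper's actual protocol is not reproduced.

```python
# Toy stand-in for the SDTP idea: redundant shares over node-disjoint
# paths let the receiver recover the data and locate a tampering path.
# Repetition + majority vote stands in for Reed-Solomon decoding.
from collections import Counter

def send_over_paths(packet, tamper_path=None):
    shares = [packet, packet, packet]        # one share per disjoint path
    if tamper_path is not None:
        shares[tamper_path] = b"forged!"     # a Byzantine node rewrites it
    return shares

def receive(shares):
    majority, _ = Counter(shares).most_common(1)[0]
    suspects = [i for i, s in enumerate(shares) if s != majority]
    return majority, suspects

data, bad_paths = receive(send_over_paths(b"route-update", tamper_path=1))
print(data)        # b'route-update'  -> recovered despite tampering
print(bad_paths)   # [1]              -> the malicious path is located
```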
23: Shejie Lu, Guohua Cui and Zhiyuan Liu. A Secure Multi-Routing Platform for Mobile Ad Hoc Networks. Abstract: In mobile ad hoc networks, it is usually difficult to optimize the assignment of network routing resources with a single type of routing protocol, owing to differences in network scale, node movement mode and node distribution. It is therefore desirable for nodes to run multiple routing protocols simultaneously, so that more than one protocol can be chosen to work jointly. Here we present a multi-routing platform for ad hoc networks, built on top of current routing protocols, for this purpose. The security mechanism is also taken into account. A formal proof of the security mechanism and simulation results on network performance demonstrate that the proposed multi-routing platform is practical for complex applications.

24: Cyrille Valentin Artho, Christian Sommer and Shinichi Honiden. Model Checking Networked Programs in the Presence of Transmission Failures. Abstract: Software model checkers work directly on single-process programs, but not on multiple processes. Conversion of processes into threads, combined with a network model, allows distributed applications to be model checked, but it does not cover potential communication failures. This paper contributes a fault model for model checking networked programs. If a naive fault model is used, spurious deadlocks may appear, because certain processes are terminated before they can complete a necessary action. Such spurious deadlocks have to be suppressed, as implemented in our model checker extension. Our approach found several faults in existing applications, and it scales well because the exceptions generated by our tool can be checked individually.

25: Luo Wei. Boosting Reliability in Fault-Tolerant Heterogeneous Systems Through Dynamic Scheduling. Abstract: The most important feature of heterogeneous distributed systems is the great variance in computing power and reliability among processors; reliability should therefore be considered when designing algorithms for such systems. However, most existing real-time task scheduling algorithms for distributed systems either ignore reliability and fault tolerance or support only one special type of backup copy. In this paper, we propose DYFARS, a dynamic, reliability-driven, real-time fault-tolerant scheduling algorithm for heterogeneous distributed systems. DYFARS leverages the primary-backup copy scheme to tolerate both hardware and software failures. Most importantly, DYFARS employs reliability cost as its main objective when dynamically scheduling independent, non-preemptive aperiodic tasks, so system reliability is enhanced without additional hardware cost. A salient difference between DYFARS and existing approaches is that DYFARS considers backup copies in both active and passive forms, making it more flexible than the scheduling schemes in the literature. Finally, simulation experiments compare DYFARS with existing similar algorithms; the results show that DYFARS is superior with respect to schedulability and reliability.
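A common formulation of reliability cost in this line of work charges an assignment the processor's failure rate times the task's execution time on it, so that minimizing total cost maximizes the probability that no task fails. The sketch below uses that assumed cost model with invented numbers; the paper's exact model and its active/passive backup handling are not reproduced.

```python
# Sketch of reliability-driven dispatching: among processors that can
# meet the deadline, pick the one with minimal reliability cost
# (failure_rate * execution_time, a common model assumed here).
import math

processors = {                    # failure rate per hour, relative speed
    "p1": {"rate": 1e-4, "speed": 1.0},
    "p2": {"rate": 5e-4, "speed": 2.0},   # faster but less reliable
}

def reliability_cost(p, base_time):
    return processors[p]["rate"] * (base_time / processors[p]["speed"])

def dispatch(base_time, deadline, now=0.0):
    feasible = [p for p in processors
                if now + base_time / processors[p]["speed"] <= deadline]
    return min(feasible, key=lambda p: reliability_cost(p, base_time))

chosen = dispatch(base_time=4.0, deadline=6.0)
print(chosen)                                        # -> p1
print(math.exp(-reliability_cost(chosen, 4.0)))      # task reliability
```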
26: Yunni Xia, Hanpin Wang and Wanling Qu. Queuing Analysis and Performance Evaluation of Workflow Through WFQN. Abstract: Performance prediction is one of the most important research topics in workflow. To investigate the performance of workflow systems under queuing conditions, this paper extends the traditional WF-net into the WFQN (workflow queuing network) by modeling tasks as FIFS (first-in-first-served) queues and the source place as an input of tokens following a Poisson arrival process. Analytical methods are introduced to evaluate queue length, waiting time and completion duration. A case study, in particular an airline ticket booking application, shows that WFQN can model real-world workflow-based applications effectively, and Monte Carlo simulations in the case study confirm the analytical models. We also present a sensitivity analysis technique to identify performance bottlenecks of a WFQN. The paper concludes with a comparison with related work.
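For a single task with Poisson arrivals and exponential service, i.e. the textbook M/M/1 queue that is the standard building block for such analyses (the paper's own WFQN formulas are not reproduced here), queue length, waiting time and completion duration all follow from the utilization rho = lambda/mu:

```python
# Queuing metrics for one workflow task modeled as an M/M/1 FIFS queue:
# Poisson arrivals at rate lam, exponential service at rate mu.

def mm1_metrics(lam, mu):
    assert lam < mu, "unstable unless arrival rate < service rate"
    rho = lam / mu                                # server utilization
    return {
        "utilization": rho,
        "mean_queue_length": rho**2 / (1 - rho),  # Lq, jobs waiting
        "mean_jobs_at_task": rho / (1 - rho),     # L = lam * W (Little)
        "mean_wait": rho / (mu - lam),            # Wq, time in queue
        "mean_completion": 1 / (mu - lam),        # W = Wq + service time
    }

# E.g. bookings arrive at 8/hour and the task serves 10/hour on average.
print(mm1_metrics(lam=8.0, mu=10.0))
```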
27: David Streader and Steve Reeves. LSB: Live and Safe B (Semantics for Extended Event B). Abstract: We extend Event B by allowing operations to have a relational semantics that is both undefined and guarded outside the precondition. We define two lifted total-relation semantics for extended Event B machines: Safe B for safety-only properties and Live B for liveness properties. The usual Event B proof obligations, Safe, are sufficient to establish Safe B refinement; satisfying Safe plus a simple additional proof obligation, ACT_REF, is sufficient to establish Live B refinement. The use of lifted total relations both prevents the ambiguity of the unlifted relational semantics and prevents operations from being clairvoyant.

28: Jun Wang, Di Zheng and Quan-yuan Wu. Design Pattern Based Replica Management for Load Balancing and Overload Control of Complex Service-Oriented Applications. Abstract: Web services are a new application model for decentralized computing and an effective mechanism for data and service integration on the web. Open distributed computer systems, such as the Internet, offer a variety of services, such as printing, data storage, remote file copying, remote login, web servers and search services. With the rapid development of e-business, web-based applications have moved from local to global, from B2C to B2B, and from centralized to decentralized. For complex service-oriented applications, however, an application may be integrated from services across the Internet, so load must be balanced across the applications to improve resource utilization and increase throughput. Load balancing is an effective way to achieve this, and various kinds of load-balancing middleware have already been applied successfully in distributed computing. Such middleware, however, does not take service types into consideration, although the workload differs invisibly across the different services requested by clients. Furthermore, traditional load-balancing middleware uses fixed, static replica management and relies on load migration to relieve overload, yet in many complex service-oriented applications the hosts are heterogeneous and decentralized, and load migration is inefficient because of the delays involved. We therefore employ a new design-pattern-based replica management approach to support fast response, hot-spot control and balanced resource allocation among different services.

29: Li Diao and Xilin Liu. The Mechanism of Information Sharing in Supply Chain Based on Mobile-agent. Abstract: The concept of the distributed intelligent mobile agent is introduced into a mobile-agent-based supply chain system. A mobile-agent-based supply chain information sharing system that enhances the agility of the supply chain is built and used in the production process to share information.
30: Tung-Hsiang Chou and Cheng-Su Wang. The Dynamic Web Services of 3G/Mobile eCommerce Based on SOA. Abstract: Since the 1990s, eCommerce has increasingly been used for business services between enterprises and consumers. In the past, many telecommunication corporations implemented few functionalities on the Internet, and most of them used traditional development methodologies to realize telecom services for their customers, providing low QoS (Quality of Service) and monotonous value-added services. Now, 3G technology can provide faster transmission rates and a diversity of value-added services to customers. To shorten development time, this research abandons traditional development and adopts a service-oriented architecture to redesign the platform of the 3G eCommerce environment. We propose a dynamic web-service composition framework to support the SOA approach for telecommunication business processes, and we construct a completely new eCommerce environment to realize our intentions.

31: Wang Jun. A Modeling of Software Architecture Reliability. Abstract: This paper introduces software architecture reliability estimation and some typical architecture-based software reliability models. We modify a software reliability estimation model so as to improve the precision of software architecture reliability estimation, and we propose a method for applying the modified model to reliability estimation in a synthetic architecture, thereby expanding the model's application domain. The paper also proposes a simplified method for calculating software architecture reliability based on the state transition matrix. The improved model is validated on an application system, and the results show that precision is effectively increased.
33: Marc Aiguier and Delphine Longuet. Test Selection Criteria for Modal Specifications of Reactive Systems. Abstract: In the framework of functional testing from algebraic specifications, the test selection strategy that has been widely and efficiently applied is based on axiom unfolding. In this paper, we propose to extend this selection strategy to a modal formalism used to specify dynamic and reactive systems. This work is a first step toward testing such systems more abstractly than most of the work dealing with what is called conformance testing. We reach a higher level of abstraction because our specifications account for what is usually called underspecification, i.e. they denote not a unique model but a class of models. Hence, the testing process can be applied at every design level.

34: Dexin Zhao, Zhiyong Feng, Guangquan Xu, Shizhan Chen and Qing Yu. Service Description Based on CW for Pervasive Service Discovery. Abstract: Existing service discovery mechanisms lack a representation of imprecise or vague information about services at the semantic level. This paper proposes a novel approach in which service advertisements and requests can be expressed in natural language involving linguistic variables, which is considered one of the essential factors for achieving the vision of pervasive computing. Compared with existing architectures, this new service description method has three features. First, service description is at the semantic level and, in particular, allows users to express requests involving imprecise or vague data in a natural way. Second, the approach is based on the theory of computing with words (CW), defined as an extension of fuzzy logic. Third, the architecture has ontology support, which makes the service descriptions understandable by all participants.
37: Howard Barringer, Dov Gabbay and David Rydeheard. A Logical Framework for Monitoring and Evolving Software Components. Abstract: We present a revision-based logical framework for modelling hierarchical assemblies of evolvable component systems. An evolvable component is a tight coupling of a pair of components, a supervisor and a supervisee, with the supervisor able both to monitor and to evolve its supervisee. An evolvable component pair is itself a component, so it may have its own supervisor or be encapsulated as part of a larger component. Components are modelled as logical theories containing actions which describe state revisions. Supervisor components are modelled as theories which are logically at a meta-level to their supervisees: revision actions at the meta-level describe theory changes in the supervisee at the object-level, and these correspond to various evolutionary changes in the component. We present this framework and show how it enables us to describe the architecture and logical structure of evolvable systems.

38: Salvador Valerio Cavadini and Diego Alejandro Cheda. Slicing with Program Points. Abstract: We present point slicing, a new slicing technique for imperative programs that uses a program point as the criterion and computes slices by deleting statements that are proved not to be reachable by executions that include the criterion point. Point slicing answers the question "Which statements can be executed when statement p is executed?", which arises frequently in program testing, debugging and understanding tasks and is not directly addressed, as far as we know, by other slicing techniques. We also show how to extend the point slicing criterion to a set of program points, and how the new technique can be used to answer another common question: "Which statements are possibly executed when statement p is executed in a program state satisfying condition \phi?" Because minimal point slices are, in general, not computable, we provide definitions of safe approximations for each type of point slice.
39: Ingo Feinerer and Gernot Salzer. Consistency and Minimality of UML Class Specifications with Multiplicities and Uniqueness Constraints. Abstract: The Unified Modelling Language (UML) has become a universal tool for the formal object-oriented specification of hardware and software. In particular, UML class diagrams and so-called multiplicities, which restrict the number of relations between objects, are essential when using UML for applications like the specification of admissible configurations of components. In this paper we give a formal definition of the semantics of UML class diagrams and multiplicities. We extend results obtained in the context of entity-relationship diagrams to cover UML-specific extensions like the (non-)uniqueness attribute of binary associations. We show that the consistency of such specifications can be checked in polynomial time, and we give an algorithm for computing minimal configurations (models). The core of our approach is a translation of UML class diagrams into Diophantine inequations.
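The flavor of the translation can be conveyed with a sketch. A binary association in which each A is linked to between a_min and a_max B's, and each B to between b_min and b_max A's, constrains the object counts nA, nB and the link count L by the linear (Diophantine) inequations a_min*nA <= L <= a_max*nA and b_min*nB <= L <= b_max*nB, with L <= nA*nB when links are unique. The brute-force search below over small counts is for illustration only; the paper's algorithm is polynomial, and it also establishes when such counts are actually realizable as a configuration.

```python
# Count-level sketch of UML multiplicities as Diophantine inequations.
# Brute-force search for a smallest nonempty configuration; the paper's
# polynomial algorithm and realizability results are not reproduced.

def minimal_configuration(a_min, a_max, b_min, b_max, unique=True, limit=50):
    """Smallest (nA, nB, L) with nA, nB >= 1 meeting both multiplicity ranges."""
    for total in range(2, 2 * limit):              # search by nA + nB
        for nA in range(1, total):
            nB = total - nA
            lo = max(a_min * nA, b_min * nB)       # L must be at least this
            hi = min(a_max * nA, b_max * nB)       # ... and at most this
            if unique:                             # no duplicate links
                hi = min(hi, nA * nB)
            if lo <= hi:
                return nA, nB, lo
    return None                                    # inconsistent up to limit

# Each A linked to exactly 3 B's; each B linked to 2..4 A's; unique links.
print(minimal_configuration(3, 3, 2, 4))           # -> (2, 3, 6)
```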
40: Qiuyu Zhang and Peili Wu. Role-Based Access Control Model with Expanded User Set. Abstract: By analyzing the respective advantages and disadvantages of traditional access control models, task- and role-based access control, and static and dynamic authority, the authors found that current access control models take little account of corporate organizational structure. Starting from corporate human resource organization, the authors introduce a role-based access control model with an expanded user set and describe the model in detail. The core idea of the model is that each employee has one fixed functional role and several business roles: static authority is assigned to the functional role, and dynamic authority is assigned to the business roles. The authors also validate the model with a formal description.

41: Xuede Zhan. A Formal Framework for Testing UML Statecharts. Abstract: This paper introduces a method of formalizing the syntax and semantics of UML statecharts with Z. According to this precise semantics, UML statecharts are transformed into FREE (Flattened Regular Expression) state models. Proper testing preorders and equivalences are introduced which allow systems to be equated or distinguished on the basis of their interaction with the surrounding environment, abstracting from their internal structure. The formal testing framework for UML statecharts is expressed with Z.

42: Kun Xiao, Shihong Chen and Jian Xiao. Aspect-Oriented Dynamic Weaving Testing Based on Sequence Diagrams. Abstract: Aspect-oriented software development is still at an initial stage, and because of the nature of dynamic weaving, object-oriented and procedure-oriented testing have difficulty testing aspect-oriented dynamic weaving. This paper analyzes the properties of dynamic weaving and derives the constraints involved in it. It then regards dynamic weaving as the process of piling the crosscutting concerns onto the core concerns and gives the formal concepts of dynamic weaving. Next, it adds weaving semantics to sequence diagrams and gives constraint testing of dynamic weaving based on sequence diagrams. This method can derive test cases directly from the dynamic weaving design, and it addresses, to some extent, the transition from aspect-oriented design to aspect-oriented testing.

43: Lin Rong-De and Xi Jian-Qing. Model Checking Mobile Ambients Based on Evolution Semantics. Abstract: A model checking method for the ambient calculus based on evolution semantics is presented. First, an abstract spatial-temporal structure is constructed from a replication-free fragment of the ambient calculus with communication primitives. Next, the operators and evolution rules, evolution Büchi automata (EBA), and the evolution traces of the spatial-temporal world are defined. Then an evolution temporal logic (ETL) with spatial and evolutional predicates is derived from first-order temporal logic; it can be used to specify properties of the ambient calculus such as safety and liveness, and the semantics of an ETL formula under spatial-temporal evolution is given. Finally, a tableau-based model checking method for verifying the properties of an ETL formula against an EBA is presented.
41: Xuede Zhan. A Formal Framework for Testing UML Statecharts Abstract: This paper introduces a method of formalizing the syntax and semantics of UML statecharts with Z. According to this precise semantics, UML statecharts are transformed into FREE (Flattened Regular Expression) state models. Proper testing pre-orders and equivalences are introduced which allow one to equate or distinguish systems on the basis of their interaction with the surrounding environment, abstracting from their internal structure. The formal testing framework for UML statecharts is expressed with Z. PDF, information on submission 42: kun xiao, shihong Chen and Jian Xiao. Aspect-oriented Dynamic Weaving Testing Based on Sequence Diagrams Abstract: Aspect-oriented software development is still in its initial stage. Owing to the nature of dynamic weaving, object-oriented and procedure-oriented testing techniques have difficulty testing aspect-oriented dynamic weaving. This paper analyzes the properties of dynamic weaving and derives the constraints involved in it. Dynamic weaving is then regarded as a process that piles crosscutting concerns on top of core concerns, and formal concepts of dynamic weaving are given. Next, weaving semantics is appended to sequence diagrams, and constraint testing of dynamic weaving based on sequence diagrams is presented. This method derives test cases directly from the dynamic weaving design, and thus addresses, to some extent, the transition from aspect-oriented design to aspect-oriented testing. PDF, information on submission 43: Lin Rong-De and Xi Jian-Qing. Model Checking Mobile Ambients based on Evolution Semantics Abstract: A model checking method for the ambient calculus based on evolution semantics is presented. First, an abstract spatial-temporal structure is constructed from a replication-free fragment of the ambient calculus with communication primitives. Next, the operators and evolution rules, evolution Büchi automata (EBA), and the evolution traces of the spatial-temporal world are defined. An evolution temporal logic (ETL) with spatial and evolutional predicates is then derived from first-order temporal logic, which can be applied to specify properties of the ambient calculus such as safety and liveness, and the semantics of an ETL formula under spatial-temporal evolution is proposed. Finally, a tableau-based model checking method for verifying an ETL formula against an EBA is presented. PDF, information on submission 45: Xiaoting Li, Lei Zhang and Jianjing Shen. Research and Implementation of a Business Architecture Platform System Based on SOA Abstract: Since current system integration approaches cannot fully meet market requirements and the enterprise's own requirements, how to use SOA and Web Services technologies to realize enterprise information integration has become a research hotspot. In this paper, we present a key idea for implementing a Business Architecture Platform (BAP) in a specific system integration project. Through the system analysis and design of the BAP, we explain the important role of SOA in its system integration. PDF, information on submission 46: Paolo Zuliani. A formal derivation of Grover's quantum search algorithm Abstract: In this paper we aim at applying established formal methods techniques to a recent software area: quantum programming. In particular, we aim at providing a stepwise derivation of Grover's quantum search algorithm. As the quantum programming model is a non-trivial generalisation of the standard one, it is essential, from a software engineering point of view, to understand whether standard techniques can cope with the quantum case. Our work shows that, in principle, traditional software engineering techniques such as specification and refinement can be applied to quantum programs. We have chosen Grover's algorithm as an example because it is one of the two main quantum algorithms. The algorithm can find with high probability an element in an unordered array of length $L$ in just $O(\sqrt{L})$ steps (while any classical probabilistic algorithm needs $\Omega(L)$ steps). The derivation starts from a rigorous probabilistic specification of the search problem; we then stepwise refine that specification via standard refinement laws and quantum laws, until we arrive at a quantum program. The final program will thus be correct by construction. PDF, information on submission
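The paper's derivation is pen-and-paper refinement, but the behavior the final program must exhibit, finding the target with high probability after about (pi/4)*sqrt(L) iterations, is easy to check numerically. The statevector sketch below is generic textbook Grover, not the program derived in the paper.

```python
import math

def grover_search(L, target):
    """Simulate Grover search over an unordered 'array' of length L and
    return the most probable index after ~(pi/4)*sqrt(L) iterations."""
    amp = [1 / math.sqrt(L)] * L                  # uniform superposition
    for _ in range(round(math.pi / 4 * math.sqrt(L))):
        amp[target] = -amp[target]                # oracle: flip the target's phase
        mean = sum(amp) / L                       # diffusion: invert about the mean
        amp = [2 * mean - a for a in amp]
    probs = [a * a for a in amp]
    return max(range(L), key=probs.__getitem__), probs[target]

idx, p = grover_search(64, target=42)
print(idx, round(p, 3))   # 42, with success probability close to 1
```

With L = 64 the loop runs 6 times and the measured success probability exceeds 0.99, matching the $O(\sqrt{L})$ claim in the abstract.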
47: Yiyun Chen, Lin Ge, Baojian Hua, Zhaopeng Li and Cheng Liu. Design of a Certifying Compiler Supporting Proof of Program Safety Abstract: Safety is an important property of high-assurance software, and one of the hot research topics in this area is verifying that software meets its safety policies. In our previous work, we designed a pointer logic system and proposed a framework for developing and verifying safety-critical programs. In this paper, we present the design and implementation of a certifying compiler based on that framework. The main parts we explain here include verification condition generation, generation of code and assertions, and proof generation for basic blocks. Our certifying compiler has the following novel features: 1) it supports a programming language equipped with both a type system and a logic system; and 2) it can produce safety proofs for programs with pointers. PDF, information on submission 48: Lingzhong Zhao, Tianlong Gu and Junyan Qian. Goal-independent Semantics for Path Dependent Analysis of Prolog Programs Abstract: Considering the execution paths and cut operators of a Prolog program can improve the precision of program analysis. Known semantics for Prolog either make use of a limited amount of path information, and hence lead to less precise analyses, or are goal dependent and therefore unsuitable for goal-independent program analysis. This paper addresses these problems by proposing a goal-independent denotational semantics for Prolog with cut, from which we can compute the set of partially computed answers associated with each program point that are obtained in the execution of any goal. With existing abstraction techniques this semantics can be abstracted into a finitely computable semantics that can serve as a base for goal-independent Prolog program analysis. PDF, information on submission 49: Wen-jun LI, Xiao-jun Liang, Hua-mei SONG and Xiao-cong ZHOU. QoS-Driven Service Composition Modeling with Extended Hierarchical CPN Abstract: Quality of Service (QoS) plays an important role in the composition of services. A Colored Petri Net (CPN) based approach is proposed to model and analyze QoS-driven service composition in a Service-Oriented Architecture (SOA), where all services are considered as dynamically joining and quitting resources. Modeled with an extension of hierarchical CPNs named QSC-nets, the QoS-driven features are expressed explicitly in the service composition model, along with the dynamic changing of service resources and the uncertain execution of the composition process at runtime. QSC-nets support not only the classical property analysis provided by CPN tools, but also new properties concerning the maximal concurrency of services in the composition model. Compared with existing Petri net based modeling approaches, QSC-nets are especially suitable for the modeling and analysis of service composition in a dynamic environment, such as a Virtual Organization (VO) defined by Grid computing. PDF, information on submission 50: Hongwei Zeng and Huaikou Miao. Specification-based Test Generation and Optimization Abstract: The capability of model checkers to construct counterexamples provides a basis for automated test generation. However, many model checking-based testing approaches just focus on generating test sets with respect to some coverage criteria. Such test sets are generally large and inefficient because of much redundancy. We propose an on-the-fly approach that alternates test generation and redundancy elimination. Our approach employs a test-tree to pick out and represent a subset of tests with equal coverage for a test criterion and no redundancy. Along with model checking a property, a new test sequence is derived from the counterexample, used to detect redundant properties, and then winnowed by the test-tree as well. We demonstrate the approach by applying our prototype algorithm to several small examples. PDF, information on submission
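The test-tree can be pictured as a prefix tree that absorbs each counterexample-derived test sequence and rejects any sequence already covered by an earlier one; this is an illustrative reconstruction of the idea, not the authors' data structure or algorithm.

```python
class TestTree:
    """Prefix tree over test sequences: a new sequence is redundant for
    prefix-based coverage iff inserting it adds no new node."""
    def __init__(self):
        self.root = {}

    def add(self, sequence):
        node, added = self.root, False
        for step in sequence:
            if step not in node:
                node[step] = {}
                added = True
            node = node[step]
        return added  # False: nothing new covered, drop the sequence

suite, tree = [], TestTree()
for test in [("init", "send", "ack"), ("init", "send"), ("init", "recv")]:
    if tree.add(test):
        suite.append(test)
print(suite)  # ("init", "send") is a prefix of the first test, so it is dropped
```

In the on-the-fly setting of the abstract, each accepted sequence would additionally be used to discharge still-unchecked coverage properties before the next model checking run.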
52: Chengying Mao. Built-in Regression Testing for Component-based Software Systems Abstract: Component-based software has been widely used in various application domains and has become a fairly popular software form. However, certain characteristics of components, such as high evolvability, implementation transparency, and limited access support, pose a great challenge for testing systems built from externally provided components, especially for regression testing. Built-in test design is a fairly effective way to improve a component's testability. In this paper, we present a built-in regression testing method to validate changes to component-based software and their impact, which requires mutual collaboration between component developers and component users. Through preliminary experiments on some medium-scale systems, our regression testing method based on built-in test design has proven fairly feasible and cost-effective in practice. Although our method achieves the same precision as Orso et al.'s method at the statement level, it needs less exchanged information (i.e., metadata) and fewer test scripts, so it is more cost-effective. PDF, information on submission 53: Cong-Cong Xing. An Object Type Graph System Abstract: While object types are an abstract specification of object behaviors, object behaviors are significantly affected by method interdependencies in objects. Conventionally, method interdependency information is not reflected in object types. As a result, objects with sufficiently distinct behaviors can be confused as having the same type in conventional type systems, which, among other things, allows more faulty programs to compile and thus weakens the reliability of programs. In this paper, we (1) introduce the notion of object type graphs (OTG), which capture method interdependencies and integrate them into object types; (2) define object typing and subtyping under OTG; (3) demonstrate how problems existing in conventional type systems can be easily resolved under OTG; (4) present an algorithm for computing object method interdependencies; and (5) provide a soundness proof of the OTG system. We argue that the OTG system is one step towards increasing the reliability of programs. PDF, information on submission
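In the simplest reading, the method interdependencies that OTG folds into a type are reachability information over an intra-object call graph; the sketch below computes that closure and is hypothetical, ignoring the paper's actual typing rules.

```python
def interdependencies(calls):
    """calls maps each method to the set of the object's own methods it
    invokes; the result records, per method, every method it transitively
    depends on, i.e. the extra information an OTG-style type would carry."""
    def reachable(method, seen):
        for callee in calls.get(method, ()):
            if callee not in seen:
                seen.add(callee)
                reachable(callee, seen)
        return seen
    return {m: reachable(m, set()) for m in calls}

# A point object whose move() uses set_x/set_y, which both use clamp():
calls = {"move": {"set_x", "set_y"}, "set_x": {"clamp"},
         "set_y": {"clamp"}, "clamp": set()}
print(sorted(interdependencies(calls)["move"]))  # ['clamp', 'set_x', 'set_y']
```

Two objects with identical method signatures but different closures would then receive distinct type graphs, which is exactly the distinction conventional structural object types cannot make.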
54: Naiyong Jin and Chengjie Shen. Dynamic Verifying The Properties of The Simple Subset of PSL Abstract: PSL is a standard specification language (IEEE-1850) for hardware and embedded system design. The simple subset of PSL conforms to the notion of monotonic advancement of time, which in turn ensures that formulas within the subset can be simulated easily. Dynamic verifiers consider only \emph{finite} executions, which may be too short to assure satisfaction of some formulas. In this paper, we examine the theories for designing dynamic verifiers for $PSL^{simple}$, the simple subset of PSL. We first study the formalism for the strength of formula satisfaction over finite words. Then we explore the combinational properties of finite words with respect to strong satisfaction, weak satisfaction and strong violation. That contributes to the acceptance conditions of automata which recognize $PSL^{simple}$ formulas over finite words. In the end, we provide a full set of construction rules for alternating automata from $PSL^{simple}$ formulas. The contribution of this paper lies in two aspects. Firstly, our study covers all fragments of $PSL^{simple}$, including LTL formulas and repentance-enhanced SEREs. Secondly, our automata can recognize all types of acceptance, such as strong satisfaction, weak satisfaction and strong violation. PDF, information on submission
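The strong/weak distinction over finite words can be illustrated on the two simplest temporal shapes: "always p" can be strongly violated but never strongly satisfied by a finite execution, while "eventually p" can be strongly satisfied but never strongly violated. The helpers below are a deliberately crude picture of this idea only; PSL's actual finite-word semantics is richer than two verdicts per formula.

```python
def always(p, trace):
    """G p over a finite trace: a failure anywhere is a strong violation;
    otherwise only weak satisfaction, since an extension could still fail."""
    return "strong violation" if not all(map(p, trace)) else "weak satisfaction"

def eventually(p, trace):
    """F p over a finite trace: one witness gives strong satisfaction,
    which no extension can undo; otherwise the verdict stays pending."""
    return "strong satisfaction" if any(map(p, trace)) else "pending"

trace = [0, 0, 3, 0]
print(always(lambda v: v >= 0, trace))       # weak satisfaction
print(eventually(lambda v: v > 2, trace))    # strong satisfaction
```

The acceptance conditions mentioned in the abstract generalize exactly this: an automaton run over a finite word must report which of these verdicts the word supports.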
55: Qian ying. A Complex Schema Matching System Abstract: Schema matching, the problem of finding semantic correspondences between elements of two schemas, plays a key role in many applications, such as data warehousing, heterogeneous data source integration and the semantic Web. Existing approaches to automating schema matching mostly focus on computing direct element matches (1:1 matches) between two schemas. However, relationships between real-world schemas involve many complex matches besides 1:1 matches. At present, only a few methods, such as iMAP, can discover complex matches, and they suffer from poor matching efficiency because the candidate match space they must search is very large. A complex schema matching system called CSM is introduced in this paper. First, it filters unreasonable matches on data types and values using a preprocessor and a clustering processor, and employs a set of special-purpose searchers in the match generator, each exploring a specialized portion of the search space to discover 1:1 and complex matches. It then estimates candidate matches and selects the optimal ones using a similarity estimator and a match selector, respectively. Finally, for the problem of opaque columns in the schemas being matched, a complementary matcher discovers matching relations between opaque columns. Thereby CSM can discover more general and reasonable matching pairs. Experiments show that CSM not only discovers matches between schemas comprehensively, but also improves matching recall and precision in practice. PDF, information on submission
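A stripped-down 1:1 matcher shows the pipeline shape the abstract describes: prune candidates by data type first, then rank the survivors with a similarity estimate and select disjoint pairs. CSM's special-purpose searchers, clustering and complex (non-1:1) matches are well beyond this sketch, and all column names below are invented.

```python
from difflib import SequenceMatcher

def match_1to1(schema_a, schema_b, threshold=0.5):
    """Toy matcher: type filtering prunes the candidate space, name
    similarity stands in for the similarity estimator, and a greedy
    pass plays the role of the match selector."""
    candidates = []
    for name_a, type_a in schema_a.items():
        for name_b, type_b in schema_b.items():
            if type_a != type_b:
                continue  # the type filter removes unreasonable matches
            score = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
            candidates.append((score, name_a, name_b))
    matches, used_a, used_b = [], set(), set()
    for score, a, b in sorted(candidates, reverse=True):
        if a not in used_a and b not in used_b and score > threshold:
            matches.append((a, b, round(score, 2)))
            used_a.add(a)
            used_b.add(b)
    return matches

s1 = {"cust_name": "str", "cust_phone": "str", "balance": "float"}
s2 = {"customer_name": "str", "phone": "str", "account_balance": "float"}
print(match_1to1(s1, s2))
# [('cust_name', 'customer_name', 0.82), ('cust_phone', 'phone', 0.67),
#  ('balance', 'account_balance', 0.64)]
```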
57: Ansgar Fehnker, Ralf Huuck, Patrick Jayet, Michel Lussenburg and Felix Rauch. Model Checking Software at Compile Time Abstract: Software has been under scrutiny by the verification community from various angles in the recent past. There are two major algorithmic approaches to ensure the correctness of and to eliminate bugs from such systems: software model checking and static analysis. These approaches are typically complementary. In this paper we use a model checking approach to solve static analysis problems. This not only avoids the scalability and abstraction issues typically associated with model checking, it also allows new properties to be specified in a concise and elegant way, scales well to large code bases, and lets the built-in optimizations of modern model checkers provide scalability in the number of properties to be checked as well. In particular, we present \emph{Goanna}, the first C/C++ static source code analyzer using the off-the-shelf model checker NuSMV, and we demonstrate Goanna's suitability for developer machines by evaluating its run-time performance, memory consumption and scalability using the source code of OpenSSL as a test bed. PDF, information on submission
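The shape of such a check can be seen on a toy control flow graph. Goanna expresses requirements like "every allocation is eventually followed by a free" as temporal-logic properties over an automatically labelled CFG and hands them to NuSMV; the sketch below hand-rolls the same question as a plain reachability search. It is an illustration of the idea, not Goanna's implementation.

```python
def leaks(cfg, labels):
    """Report allocation nodes from which some CFG path reaches an exit
    node (no successors) without ever passing a node labelled 'free'."""
    def exits_without_free(start):
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            if labels.get(node) == "free":
                continue            # this branch discharges the obligation
            if not cfg[node]:
                return True         # reached an exit with the object unfreed
            stack.extend(cfg[node])
        return False
    return [n for n, lab in labels.items()
            if lab == "malloc" and exits_without_free(n)]

# Diamond-shaped CFG: node 1 allocates, but only the branch through 2 frees.
cfg = {0: [1], 1: [2, 3], 2: [4], 3: [4], 4: []}
labels = {1: "malloc", 2: "free"}
print(leaks(cfg, labels))  # [1]: the path 1 -> 3 -> 4 never frees
```

Delegating this to a model checker instead, as the paper does, buys declarative property specifications and the checker's built-in optimizations for free.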
58: Fei Xu and Li Zhang. Modeling Collaboration Business processes using Petri nets and Pi calculus Abstract: Awareness of the need for process orientation in the IT support field has been increasing over recent decades, making the modeling and analysis of business processes more and more significant. Among various formal methods, Petri nets have been applied in workflow management mainly because of their visual nature grounded in rigorous graph theory. Another potential candidate, Pi calculus, a branch of process algebra, has proved more capable of modeling mobility and interaction. However, there has been no formal method integrating these two formalisms, even though such work may be of great significance. In this paper, based on an analysis of both Petri nets and Pi calculus, we introduce a mapping model between the two formal methods, which integrates the workflow model of Petri nets and the interaction model of Pi calculus to describe collaboration business processes. PDF, information on submission
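The token-game semantics that makes Petri nets attractive for workflow modeling fits in a few lines; the sketch below shows only plain place/transition firing and does not reproduce the paper's mapping to Pi calculus.

```python
def enabled(marking, transition):
    """A transition may fire iff every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["pre"].items())

def fire(marking, transition):
    """Consume tokens from input places, produce tokens on output places."""
    assert enabled(marking, transition)
    m = dict(marking)
    for p, n in transition["pre"].items():
        m[p] -= n
    for p, n in transition["post"].items():
        m[p] = m.get(p, 0) + n
    return m

# Two-step approval workflow: submit, then review.
submit = {"pre": {"start": 1}, "post": {"submitted": 1}}
review = {"pre": {"submitted": 1}, "post": {"done": 1}}
print(fire(fire({"start": 1}, submit), review))
# {'start': 0, 'submitted': 0, 'done': 1}
```

What Petri nets state as token flow, Pi calculus would state as communication over channels; the paper's mapping model is precisely about moving between these two views.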
59: Chunxiao Lin, Andrew McCreight, Zhong Shao, Yiyun Chen and Yu Guo. Foundational Typed Assembly Language with Certified Garbage Collection Abstract: Type-directed certifying compilation and typed assembly language (TAL) aim to minimize the trusted computing base of safe languages by directly type checking machine code. However, the safety of TAL still heavily relies on its safe interaction with the underlying garbage collector. Based on a recent variant of foundational proof-carrying code (FPCC), we introduce a general methodology for building foundational TAL with certified garbage collection. The usability of this methodology has been demonstrated by linking a typical TAL with a conservative garbage collector, which includes proving the safety of the collector, the soundness of TAL, and the safe interaction between TAL programs and the garbage collector. Our work is fully mechanized within the Coq proof assistant and the linked programs can be shipped immediately as FPCC packages. PDF, information on submission 62: Bouhdadi Mohamed, Balouki Youssef and Chabbar El maati. A DENOTATIONAL SEMANTICS FOR STRUCTURAL CONCEPTS IN ODP ENTERPRISE LANGUAGE Abstract: The Reference Model for Open Distributed Processing (RM-ODP) is a meta-norm for other ODP standards that remain to be defined. It defines a generic object and action model, and an architecture for the development of ODP systems in terms of five viewpoints. In this paper we address the need for a formal specification and notation for the enterprise viewpoint language. Indeed, the viewpoint languages are abstract in the sense that they define what concepts should be supported, not how these concepts should be represented. The UML standard has adopted a meta-modeling approach to defining the abstract syntax of UML. One approach to defining the formal semantics of a language is denotational: essentially elaborating the value or instance denoted by an expression of the language in a particular context. Combining the UML/OCL meta-modeling approach and denotational semantics, we define in this paper the syntax and semantics of a fragment of the ODP object concepts defined in the RM-ODP foundations part and in the enterprise language. These specification concepts are suitable for describing and constraining ODP enterprise viewpoint specifications. PDF, information on submission 63: Liangli MA, Houxiang Wang and Yongjie LI. Construct Metadata based on Change Model using for Component-based Software Regression Testing Abstract: Component metadata is one of the most effective methods to improve the testability of component-based software. Building on the metadata method presented by Orso, we give a definition of component and summarize all possible changes to a component. These changes are further classified into two types, and their change cases are described in detail. Component changes are then described as a two-tuple, and a change model is constructed. We present a mapping mechanism that maps changes within a component to changes of the component's interface, and introduce the concepts of change-interface mapping graph (C-IMG), change-interface dependency relationship (C-IDR), method dependency graph (MDG) and component interface model (CIM) to describe it. Furthermore, a mapping algorithm is given based on this mechanism. Finally, we apply the model to the component RegisterStuGrade, developed by ourselves, and compare it with a regression testing technique without metadata and with Orso's method, in terms of the number of regression test cases and running time, to verify the validity of our method.
Similar Resources
The Comparison of Imperialist Competitive Algorithm Applied and Genetic Algorithm for Machining Allocation of Clutch Assembly (TECHNICAL NOTE)
The allocation of design tolerances between the components of a mechanical assembly and manufacturing tolerances can significantly affect the functionality of products and the related production costs. This paper introduces an Imperialist Competitive Algorithm (ICA) approach to solve the machining tolerance allocation of an overrunning clutch assembly. The objective is to obtain optimum tolerances of...
The Finite Horizon Economic Lot Scheduling in Flexible Flow Lines
This paper addresses the common cycle multi-product lot-scheduling problem in flexible flow lines (FFL) where the product demands are deterministic and constant over a finite planning horizon. The objective is to minimize the sum of setup costs and of work-in-process and final-product inventory holding costs per time unit, while satisfying the demands without backlogging. This problem consists of a combi...
Parametric Tolerance Analysis of Mechanical Assembly Using FEA and Cost Competent Tolerance Synthesis Using Neural Network
Tolerance design plays an important role in the modern design process by introducing quality improvements and limiting manufacturing costs. Tolerance synthesis is a procedure that distributes assembly tolerances between components or distributes final part design tolerances between related tolerances. Traditional tolerance design assumes that all objects have rigid geometry, overlooking the rol...
Designing Tabriz Carpet Art Museum Architecture Based on the Blue Mosque Historical Context Identity Potentials
Due to the universal importance of Tabriz carpet art and the presence of specific indicators and motifs in it, an architectural site representing this art as a carpet art museum in Tabriz is necessary. The site of this project is located adjacent to the Tabriz Blue Mosque, near the museum historical site. It is essential to consider the historical elements and c...
Improving Architectural Design Skills with Design-Based Learning of New Structures
The purposeful and applied learning of Structures as a pillar of architectural design is very important. The current educational content of Structures in architecture departments is based on theoretical discussions, mathematical formulas, and lecture-oriented material. As a result, students are incompetent in applying practical concepts and structural formal analyses to architectural design. Ef...
A Hierarchical Production Planning and Finite Scheduling Framework for Part Families in Flexible Job-shop (with a case study)
The tendency toward optimization in recent decades has resulted in the creation of multi-product manufacturing systems. Production planning in such systems is difficult, because the optimal production volume that is calculated must be consistent with the limitations of the production system. Hence, integration has been proposed to decide these problems concurrently. The main problem in integration is how we can relate pro...
Journal:
Volume/Issue:
Pages: -
Publication date: 2007